
    Guidelines for Computer Testing

    Testing by computer is big business. Many companies offer software that enables a psychologist to test a client by seating him or her at a computer terminal and pressing Return. The software presents the instructions on the screen, guides the test taker through sample items to check that the instructions are understood, and then presents the test, automatically recording the responses. After one or more tests have been completed, the equipment scores the responses and delivers test scores. But it doesn't stop there: it then prints out a complete test interpretation in fairly well-constructed narrative prose. The prose often shows a few signs of having been pasted together out of standard phrases, sentences, and paragraphs, but then so do many reports written by real psychologists.

    The proliferation of testing systems and automated test interpreters has generated consternation among some clinical psychologists. Matarazzo (1983) cried wolf in an editorial in Science, and went a little far, seeming to condemn all computerized testing. I replied (Green, 1983b) that there is much less concern about the computer giving the test than about the computer interpreting the test. In fact, a group at the Navy Personnel Research and Development Center in San Diego (McBride & Martin, 1983; Moreno, Wetzel, McBride, & Weiss, 1984) had just successfully transferred the Armed Services Vocational Aptitude Battery (ASVAB) to the computer, with no major difficulties. The Navy group used Computerized Adaptive Testing (CAT), the most important advance in cognitive testing (Green, 1983a; Weiss, 1985). In a CAT, the computer chooses the next item to be administered on the basis of the responses to the previous items. This procedure requires a new kind of test theory: classical test theory is not adequate. The new theory, item response theory (IRT), is now quite well developed, although it is still new and cumbersome.

    Using IRT, a computer can readily tailor the test to each test taker. The Navy group has successfully used the technique to administer the ASVAB. It has been found that a conventional test can be replaced by an adaptive test with about half the items, at no loss of reliability or validity. For many test takers, a conventional test has a lot of wasted items: items that are too easy for the good students, and items that are too hard for the poor students. If the items are chosen to be most informative about the individual test taker, a lot of time can be saved. Of course, this means developing an estimate of the test taker's ability as the test progresses, and it implies many intermediate calculations, but the computer is good at that.

    An interesting by-product of CAT is that nearly everybody who takes it likes it. Such a test provides more success experiences than the lower half of the ability spectrum is used to, and does not seem to disconcert the high scorers. Also, the computer is responsive: as soon as an answer is input, another item appears on the screen. The computer is attending to the test taker in an active way that an answer sheet cannot emulate. Hardwicke and Yoes (1984) report that one recruit said of the CAT version of the ASVAB, "It's faster, it's funner, and it's more easier."

    Although computerized administration seemed to be working well in the cognitive area, there was more concern about personality tests. The American Psychological Association began getting several calls each week from its members asking about, or complaining about, computerized testing. Apparently, some guidelines were needed for the users and the developers of computer-based tests and assessments. We hoped to stimulate orderly, controlled growth in an important and volatile field. The Guidelines (APA, 1986; see Appendix) address the development, use, and technical evaluation of computerized tests and test interpretations. They emphasize personality tests and personality assessments, but are relevant to all computer testing.

    Why develop guidelines when we have just finished congratulating ourselves on the new joint Testing Standards (APA, AERA, & NCME, 1985)? Because the Testing Standards cover this situation only in a generic way, and deserve amplification in particular details, especially for computer-based assessments, that is, narrative interpretations. The new Guidelines are viewed as a special application of the new Testing Standards and as subordinate to them in case of any perceived conflict
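    The adaptive loop described in the abstract (estimate ability, then administer the remaining item that is most informative at that estimate) can be sketched with a simple two-parameter logistic (2PL) IRT model. The item bank, its parameters, and the grid-search ability estimator below are hypothetical illustrations, not the Navy group's actual procedure:

```python
import math

# Hypothetical item bank: (discrimination a, difficulty b) per item.
ITEM_BANK = [(1.2, -1.5), (0.8, -0.5), (1.5, 0.0), (1.0, 0.7), (1.3, 1.4)]

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of one item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, administered):
    """Pick the unused item that is most informative at the current estimate."""
    candidates = [i for i in range(len(ITEM_BANK)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *ITEM_BANK[i]))

def estimate_theta(responses):
    """Crude grid-search MLE of ability over [-3, 3]."""
    grid = [g / 10.0 for g in range(-30, 31)]
    def loglik(theta):
        total = 0.0
        for item, correct in responses:
            p = p_correct(theta, *ITEM_BANK[item])
            total += math.log(p if correct else 1.0 - p)
        return total
    return max(grid, key=loglik)

# Simulate a short adaptive test for an examinee who answers correctly.
theta_hat, administered, responses = 0.0, set(), []
for _ in range(3):
    item = next_item(theta_hat, administered)
    administered.add(item)
    responses.append((item, True))
    theta_hat = estimate_theta(responses)
```

    This captures the "intermediate calculations" the text mentions: after every response the ability estimate is refreshed, so the next item is matched to the test taker rather than drawn from a fixed sequence.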

    Book Reviews


    String theory and the crisis of particle physics II or the ascent of metaphoric arguments

    This is a completely reformulated presentation of a previous paper with the same title, this time with a much stronger emphasis on conceptual aspects of string theory and a detailed review of its history, which already spans more than four decades, within a broader context, including some little-known details. Although there have been several books and essays on string theory's sociological impact and philosophical implications, there has as yet been no serious attempt to scrutinize its claims about particle physics using the powerful conceptual arsenal of contemporary local quantum physics. I decided to leave the previous first version on the arXiv because it may be interesting to the reader to notice the change of viewpoint and the reason behind it. Other reasons for not letting my first version go into print, and for rewriting it so that its content complies with my different present viewpoint, can be found at the end of the article. The central message, contained in sections 5 and 6, is that string theory is not what string theorists think and claim it is. The widespread acceptance of a theory whose interpretation has been obtained by metaphoric reasoning has had a corroding influence on the rest of particle physics theory, as will be illustrated in several concrete cases. The work is dedicated to the memory of Juergen Ehlers, with whom I shared many critical ideas, but their formulation in this essay is fully my responsibility.
    Comment: A dedication and an epilog to the memory of Juergen Ehlers. Extension of the last two sections, removal of typos and changes in formulation, 68 pages late

    Mitochondrial echoes of first settlement and genetic continuity in El Salvador

    Background: From Paleo-Indian times to recent historical episodes, the Mesoamerican isthmus played an important role in the distribution and patterns of variability around the double American continent. However, the amount of genetic information currently available on Central American continental populations is very scarce. In order to shed light on the role of Mesoamerica in the peopling of the New World, the present study focuses on the analysis of mtDNA variation in a population sample from El Salvador.

    Methodology/Principal Findings: We have carried out DNA sequencing of the entire control region of the mitochondrial DNA (mtDNA) genome in 90 individuals from El Salvador. We have also compiled more than 3,985 control region profiles from the public domain and the literature in order to carry out inter-population comparisons. The results reveal a predominant Native American component in this region: by far the most prevalent mtDNA haplogroup in this country (at ~90%) is A2, in contrast with other North, Meso- and South American populations. Haplogroup A2 shows a star-like phylogeny and is very diverse, with a substantial proportion of mtDNAs (45%; sequence range 16090–16365) still unobserved in other American populations. Two different Bayesian approaches used to estimate admixture proportions in El Salvador show that the majority of the mtDNAs observed come from North America. A preliminary founder analysis indicates that the settlement of El Salvador occurred about 13,400±5,200 Y.B.P. The founder age of A2 in El Salvador is close to the overall age of A2 in America, which suggests that the colonization of this region occurred within a few thousand years of the initial expansion into the Americas.

    Conclusions/Significance: As a whole, the results are compatible with the hypothesis that today's A2 variability in El Salvador represents to a large extent the indigenous component of the region. Concordant with this hypothesis is the observation of a very limited contribution from European and African women (~5%). This implies that the Atlantic slave trade had a very small demographic impact in El Salvador, in contrast to its transformation of the gene pool in neighbouring populations from the Caribbean facade
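    Founder analyses of the kind mentioned above commonly rest on the rho statistic: the mean mutational distance of sampled haplotypes from the presumed founder type, converted to years by a calibrated substitution rate. The sketch below illustrates only the arithmetic of that conversion; the distances and the calibration constant are invented for illustration and are not the paper's data:

```python
# Mutational distance (number of substitutions) from each sampled
# haplotype to the presumed founder haplotype of the clade.
# These counts are hypothetical.
distances = [1, 2, 0, 3, 1, 2, 2, 1]

# Hypothetical calibration: average years per substitution for the
# mtDNA control region. Real studies use published rate estimates
# that carry their own uncertainty.
YEARS_PER_SUBSTITUTION = 9000.0

def rho_age(distances, years_per_substitution):
    """Return (rho, age): mean mutational distance and the
    corresponding age estimate in years before present."""
    rho = sum(distances) / len(distances)
    return rho, rho * years_per_substitution

rho, age = rho_age(distances, YEARS_PER_SUBSTITUTION)
```

    The large standard error quoted in the abstract (±5,200 years) reflects how sensitive such estimates are to both the sampled distances and the assumed substitution rate.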

    Message Passing for Complex Question Answering over Knowledge Graphs

    Question answering over knowledge graphs (KGQA) has evolved from simple single-fact questions to complex questions that require graph traversal and aggregation. We propose a novel approach for complex KGQA that uses unsupervised message passing, which propagates confidence scores obtained by parsing an input question and matching terms in the knowledge graph to a set of possible answers. First, we identify entity, relationship, and class names mentioned in a natural language question, and map these to their counterparts in the graph. Then, the confidence scores of these mappings propagate through the graph structure to locate the answer entities. Finally, these are aggregated depending on the identified question type. This approach can be efficiently implemented as a series of sparse matrix multiplications mimicking joins over small local subgraphs. Our evaluation results show that the proposed approach outperforms the state of the art on the LC-QuAD benchmark. Moreover, we show that the performance of the approach depends only on the quality of the question interpretation results, i.e., given a correct relevance score distribution, our approach always produces a correct answer ranking. Our error analysis reveals correct answers missing from the benchmark dataset and inconsistencies in the DBpedia knowledge graph. Finally, we provide a comprehensive evaluation of the proposed approach accompanied by an ablation study and an error analysis, which showcase the pitfalls of each of the question answering components in more detail.
    Comment: Accepted in CIKM 201
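    The propagation step the abstract describes (entity and relation confidences spread through the graph via matrix products) can be sketched on a toy graph. The entities, the relation, and the confidence values below are invented for illustration; a full implementation would use sparse matrices (e.g. scipy) over a large knowledge graph such as DBpedia:

```python
import numpy as np

# Toy knowledge graph: 4 entities and one relation, "capitalOf".
entities = ["Berlin", "Germany", "Paris", "France"]
# A[i, j] = 1 if (entities[i], capitalOf, entities[j]) holds.
A_capital_of = np.zeros((4, 4))
A_capital_of[0, 1] = 1.0  # Berlin capitalOf Germany
A_capital_of[2, 3] = 1.0  # Paris capitalOf France

def propagate(entity_scores, relation_scores):
    """One message-passing step: spread entity confidences along each
    matched relation, weighted by the relation-match confidence.
    Both directions are summed so the answer can be either the
    subject or the object of the triple."""
    answers = np.zeros(len(entities))
    for A, weight in relation_scores:
        answers += weight * (A.T @ entity_scores)  # subject -> object
        answers += weight * (A @ entity_scores)    # object -> subject
    return answers

# "What is the capital of Germany?": a hypothetical matcher scores
# the entity "Germany" at 0.9 and the relation "capitalOf" at 0.8.
question_scores = np.array([0.0, 0.9, 0.0, 0.0])
scores = propagate(question_scores, [(A_capital_of, 0.8)])
best = entities[int(np.argmax(scores))]
```

    Because each relation is a (sparse) adjacency matrix, the matrix product is exactly the "join over small local subgraphs" mentioned above, and chaining products handles multi-hop questions.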

    Mapping carcass and meat quality QTL on Sus Scrofa chromosome 2 in commercial finishing pigs

    Quantitative trait loci (QTL) affecting carcass and meat quality located on SSC2 were identified using variance component methods. QTL for a large number of meat and carcass quality traits were detected in a commercial crossbred population: 1855 pigs sired by 17 boars from a synthetic line that was homozygous (A/A) for IGF2. Using combined linkage and linkage disequilibrium mapping (LDLA), several QTL significantly affecting loin muscle mass, ham weight, ham muscles (outer ham and knuckle ham), and meat quality traits such as Minolta L* and b*, ultimate pH, and Japanese colour score were detected. These results agree well with previous QTL studies involving SSC2. Since our study was carried out on crossbreds, different QTL may be segregating in the parental lines. To address this question, we compared models with a single QTL variance component to models allowing for separate sire and dam QTL variance components. The same QTL were identified under both models, with only minor differences in QTL location. However, the variance component method made it possible to detect QTL segregating in the paternal line (e.g. HAMB), the maternal lines (e.g. Ham), or in both (e.g. pHu). Combining association and linkage information among haplotypes slightly improved the significance of the QTL compared to an analysis using linkage information only

    ‘There is a Time to be Born and a Time to Die’ (Ecclesiastes 3:2a): Jewish Perspectives on Euthanasia

    Reviewing the publications of prominent American rabbis who have published extensively on Jewish biomedical ethics, this article highlights Orthodox, Conservative and Reform opinions on a most pressing contemporary bioethical issue: euthanasia. Examining these opinions against the background of the halachic character of Jewish (biomedical) ethics, the article shows how diverse, even contradictory, opinions emerge from a single traditional Jewish textual source through different interpretations. In this way, the Jewish debate on euthanasia brings out both the specific methodology of Jewish (bio)ethical reasoning and the diversity of opinion within Judaism and its branches

    Epigenetic reprogramming at estrogen-receptor binding sites alters 3D chromatin landscape in endocrine-resistant breast cancer

    Endocrine therapy resistance frequently develops in estrogen receptor positive (ER+) breast cancer, but the underlying molecular mechanisms are largely unknown. Here, we show that three-dimensional (3D) chromatin interactions both within and between topologically associating domains (TADs) frequently change in ER+ endocrine-resistant breast cancer cells, and that the differential interactions are enriched for resistance-associated genetic variants at CTCF-bound anchors. Ectopic chromatin interactions are preferentially enriched at active enhancers, promoters, and ER binding sites, and are associated with altered expression of ER-regulated genes, consistent with dynamic remodelling of ER pathways accompanying the development of endocrine resistance. We observe that loss of 3D chromatin interactions often occurs coincidentally with hypermethylation and loss of ER binding. Alterations in active A and inactive B chromosomal compartments are also associated with decreased ER binding and atypical interactions and gene expression. Together, our results suggest that 3D epigenome remodelling is a key mechanism underlying endocrine resistance in ER+ breast cancer